Patent abstract:
An image acquisition device (100) includes a photocathode (110) converting an incident photon flux into an electron stream, a sensor (130), and processing means (140). The device according to the invention comprises a matrix (120) of elementary filters, each associated with at least one pixel of the sensor, said matrix being disposed upstream of the photocathode. The matrix comprises primary color filters and transparent filters, called panchromatic filters. The processing means (140) are adapted to: - calculate a quantity, called the useful magnitude (F), to determine whether at least one zone of the sensor is in conditions of low or high illumination, the useful magnitude being representative of an average surface flux of photons or electrons detected on a set of panchromatic pixels of the sensor; - only if said zone is in conditions of high illumination, form an image of said zone from the primary color pixels of this zone.
Publication number: FR3026223A1
Application number: FR1458903
Filing date: 2014-09-22
Publication date: 2016-03-25
Inventors: Damien Letexier; Franck Robert; Geoffroy Deltel
Applicant: Photonis France SAS
IPC main classification:
Patent description:

[0001] BIMODE IMAGE ACQUISITION DEVICE WITH PHOTOCATHODE. TECHNICAL FIELD The present invention relates to the field of night vision image acquisition devices comprising a photocathode adapted to convert a stream of photons into a stream of electrons. The field of the invention is more particularly that of such devices using matrix color filters. STATE OF THE PRIOR ART Various devices for acquiring night vision images, comprising a photocathode, are known in the prior art.
[0002] Such a device is for example an image intensifier tube, comprising a photocathode, adapted to convert an incident flux of photons into an initial flow of electrons. This initial flow of electrons propagates inside the intensifier tube, where it is accelerated by a first electrostatic field towards multiplication means.
[0003] These multiplying means receive said initial flow of electrons, and in response provide a secondary electron flow. Each initial electron incident on an input side of the multiplication means causes the emission of several secondary electrons on the side of the output face of these same means. Thus, an intense secondary electron flux is generated from a low initial electron flux, and thus ultimately from very low intensity light radiation.
[0004] The secondary electron flux is accelerated by a third electrostatic field in the direction of a phosphor screen, which converts the secondary electron flux into a photon flux. Thanks to the multiplication means, the photon flux provided by the phosphor screen corresponds to the flux of photons incident on the photocathode, but is more intense. In other words, each photon of the photon flux incident on the photocathode corresponds to several photons of the photon flux supplied by the phosphor screen. The photocathode and the multiplying means are placed in a vacuum tube having an entrance window to let the incident photon flux reach the photocathode. The vacuum tube can be closed by the phosphor screen. When the incident photon flux on the photocathode is converted into an initial electron flux, the information relating to the wavelength of the photons is lost. Thus, the photon flux provided by the phosphor screen corresponds to a monochrome image. GB 2 302 444 proposes an image intensifier tube for rendering a polychromatic image. A first primary color filter array is disposed upstream of the photocathode to filter the incident photon flux before it reaches the photocathode. A primary color filter is a spectral filter which does not transmit the part of the visible spectrum complementary to this primary color. Thus, a primary color filter is a spectral filter that transmits the part of the visible spectrum corresponding to this primary color, and possibly part of the infrared spectrum, and even part of the near-UV spectrum (200 to 400 nm) or even the UV spectrum (10 to 200 nm). The first primary color filter array consists of red, green and blue filters that define primary color pixels on the photocathode. Thus, a photon flux incident on a given pixel of the photocathode corresponds to a given primary color. The electron flow supplied in response by the photocathode does not directly contain chromatic information, but corresponds to this given primary color.
[0005] At the output of the intensifier tube, the photon flux supplied by the phosphor screen corresponds to a white light, a combination of several wavelengths corresponding in particular to red, green and blue. This stream is filtered by a second matrix of primary color filters. This second matrix defines pixels of primary color on the phosphor screen. Thus, a flux of photons emitted by a given pixel of the phosphor screen is filtered by a primary color filter. At the output of this primary color filter, a flux of photons corresponding to a given primary color is obtained. The second matrix is identical to and aligned with the first matrix. The pixels of the phosphor screen are therefore aligned with the pixels of the photocathode. The image supplied at the output of the second matrix is thus composed of pixels of three primary colors, corresponding to an intensified version of the pixelated image at the output of the first matrix. This produces a night vision intensifier tube providing a color image. However, because of the presence of the two matrices of primary color filters, this intensifier tube has high energy losses, which are detrimental in a field characterized by the need for a strong intensification of a photon flux. An object of the present invention is to provide an image acquisition device for acquiring color images while limiting energy losses.
[0006] DISCLOSURE OF THE INVENTION This object is achieved with an image acquisition device comprising: a photocathode adapted to convert an incident flux of photons into a stream of electrons; a sensor consisting of a matrix of elements, called pixels; and processing means. According to the invention: the device comprises a matrix of elementary filters, each associated with at least one pixel of the sensor, said matrix being disposed upstream of the photocathode, so that an initial flow of photons passes through said matrix before reaching the photocathode; the matrix comprises primary color filters, a primary color filter not transmitting a part of the visible spectrum complementary to said primary color, and filters transmitting the entire visible spectrum, called panchromatic filters; and the processing means are adapted to: calculate a quantity, called the useful magnitude, to determine if at least one zone of the sensor is in conditions of low or high illumination, the useful magnitude being representative of an average surface flux of photons or electrons detected on a set of so-called panchromatic pixels of the sensor, each panchromatic pixel being associated with a panchromatic filter; only if said zone is in conditions of high illumination, form a color image of said zone from the pixels of this zone associated with primary color filters.
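By way of illustration, a minimal sketch of the useful-magnitude test described above is given below, assuming the sensor output is available as a 2D array of per-pixel counts together with a boolean mask marking the panchromatic pixels; the function names, array layout and threshold are illustrative assumptions, not elements of the patent.

```python
import numpy as np

def useful_magnitude(frame: np.ndarray, panchromatic_mask: np.ndarray) -> float:
    """Useful magnitude F: average flux of photons or electrons detected on
    the panchromatic pixels of a sensor zone (illustrative sketch)."""
    return float(frame[panchromatic_mask].mean())

def is_high_illumination(frame: np.ndarray, panchromatic_mask: np.ndarray,
                         f_th: float) -> bool:
    """Compare F with a threshold Fth to choose between the low- and
    high-illumination modes of the device."""
    return useful_magnitude(frame, panchromatic_mask) > f_th
```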
[0007] According to an advantageous embodiment, the photocathode is disposed inside a vacuum chamber, and the matrix of elementary filters is located on an inlet window of said vacuum chamber.
[0008] In a variant, the photocathode is placed inside a vacuum chamber closed by a bundle of optical fibers, and each elementary filter of the matrix of elementary filters is deposited on an end of an optical fiber of said bundle.
[0009] The sensor may be a photosensitive sensor, the processing means may be adapted to calculate a magnitude representative of a mean surface flux of photons, and the device may further comprise: multiplication means adapted to receive the flow of electrons emitted by the photocathode, and to provide in response a secondary flow of electrons; and a phosphor screen, adapted to receive the secondary electron flow and to provide in response a photon flux, called the useful photon flux, the sensor being arranged to receive said useful photon flux. In a variant, the sensor may be an electron-sensitive sensor adapted to receive the electron flux emitted by the photocathode, and the processing means may be adapted to calculate a magnitude representative of an average electron surface flux. Preferably, the panchromatic filters represent 75% of the elementary filters. The matrix of elementary filters is advantageously generated by the two-dimensional periodic repetition of the following pattern:
R W G W
W W W W
G W B W
W W W W
where R, G, B respectively represent red, green and blue primary color filters, and W represents a panchromatic filter, the pattern being defined up to a permutation of R, G, B.
[0010] Alternatively, the elementary filter matrix may be generated by the two-dimensional periodic repetition of the following pattern:
Ye W Ma W
W  W W  W
Ma W Cy W
W  W W  W
where Ye, Ma, Cy respectively represent yellow, magenta and cyan primary color filters, and W represents a panchromatic filter, the pattern being defined up to a permutation of Ye, Ma, Cy. Preferably, the processing means are adapted to: determine that said zone is at low illumination, if the useful magnitude is less than a first threshold; and determine that said zone is at high illumination, if the useful magnitude is greater than a second threshold, the second threshold being greater than the first threshold. If the useful magnitude is between the first and second thresholds, the processing means are advantageously adapted to combine a monochrome image and the color image of said zone, the monochrome image of said zone being obtained from the panchromatic pixels of this zone. Preferably, the processing means are adapted to: form a monochrome image from all the panchromatic pixels of the sensor; segment this monochrome image into homogeneous regions; and for each zone of the sensor associated with a homogeneous region, independently calculate the corresponding useful magnitude to determine whether said zone is in low or high illumination conditions.
[0011] The matrix of elementary filters may furthermore comprise infrared filters that do not transmit the visible part of the spectrum, each infrared filter being associated with at least one pixel of the sensor, called an infrared pixel. When a zone is in low illumination conditions, the processing means are advantageously adapted to: compare a predetermined infrared threshold and a magnitude, called the secondary magnitude, representative of an average surface flux of photons or electrons detected by the infrared pixels of this zone; when said secondary quantity is greater than the predetermined infrared threshold, superimpose a monochrome image obtained from the panchromatic pixels of this zone and a false-color image obtained from the infrared pixels of this zone. As a variant, when a zone is in conditions of low illumination, the processing means are advantageously adapted to: from the infrared pixels of this zone, identify sub-zones of this zone detecting an average surface flux of photons or electrons that is homogeneous in the infrared spectrum; for each sub-zone thus identified, compare a predetermined infrared threshold and a quantity, called the secondary quantity, representative of an average surface flux of photons or electrons detected by the infrared pixels of this sub-zone; when said secondary quantity is greater than the predetermined infrared threshold, superimpose a monochrome image obtained from the panchromatic pixels of this sub-zone and a false-color image obtained from the infrared pixels of this sub-zone.
[0012] The matrix of elementary filters may consist of an image projected by a projection optical system. The invention also relates to a method of forming an image, implemented in a device comprising a photocathode adapted to convert an incident flux of photons into an electron flux, and a sensor, the method comprising the following steps: filtering an initial photon flux, to provide said incident photon flux, this filtering implementing a matrix of elementary filters comprising primary color filters, a primary color filter not transmitting a portion of the visible spectrum complementary to said primary color, and filters transmitting the entire visible spectrum, called panchromatic filters; calculating a quantity, called the useful magnitude, to determine if at least one zone of the sensor is in conditions of low or high illumination, the useful magnitude being representative of an average surface flux of photons or electrons detected on a set of so-called panchromatic pixels of the sensor, each panchromatic pixel being associated with a panchromatic filter; only if said zone is in conditions of high illumination, forming a color image of said zone from the pixels of this zone associated with primary color filters.
[0013] BRIEF DESCRIPTION OF THE DRAWINGS The present invention will be better understood on reading the description of exemplary embodiments given purely by way of indication and in no way limiting, with reference to the appended drawings in which: FIG. 1 schematically illustrates the principle of a device according to the invention; FIG. 2 schematically illustrates a first embodiment of a processing implemented by the processing means according to the invention; FIGS. 3A and 3B schematically illustrate two variants of a first embodiment of a matrix of elementary filters according to the invention; FIG. 4 schematically illustrates a first embodiment of a device according to the invention; FIGS. 5A and 5B schematically illustrate two variants of a second embodiment of a device according to the invention; FIG. 6 schematically illustrates a second embodiment of a matrix of elementary filters according to the invention; and FIG. 7 schematically illustrates a second embodiment of a processing implemented by the processing means according to the invention. DETAILED DESCRIPTION OF PARTICULAR EMBODIMENTS FIG. 1 schematically illustrates the principle of an image acquisition device 100 according to the invention. The device 100 comprises a photocathode 120, operating as described in the introduction, and a matrix 110 of elementary filters 111 located upstream of the photocathode. For example, a GaAs (gallium arsenide) photocathode is used. Any other type of photocathode can be used, in particular photocathodes sensitive over a wider wavelength spectrum, including the visible (about 400 to 800 nm), and possibly the near infrared or even the infrared, and/or the near UV (ultraviolet), or even the UV. Each elementary filter 111 filters the light incident on a location of the photocathode 120. Each elementary filter 111 thus defines a pixel on the photocathode 120. The elementary filters 111 are transmission filters of at least two different categories: primary color filters, and transparent (or panchromatic) filters. A primary color elementary filter is defined in the introduction. The elementary filters of the matrix 110 include three types of primary color filters, i.e. filters of three primary colors. This allows an additive or subtractive synthesis of all the colors of the visible spectrum. In particular, each type of primary color filter transmits only a portion of the visible spectrum, i.e. a band of the 400-700 nm wavelength range, and the different types of primary color filters together cover all of this range. In addition to a portion of the visible spectrum, each primary color filter can transmit a portion of the near infrared or even infrared spectrum and/or a portion of the near UV or UV spectrum. The color filters can be red, green, blue filters, in the case of an additive synthesis, or yellow, magenta, cyan filters, in the case of a subtractive synthesis. Other sets of primary colors may be contemplated by those skilled in the art without departing from the scope of the present invention. The panchromatic elementary filters transmit the whole visible spectrum. Where appropriate, they may also transmit at least a portion of the near-infrared or even infrared spectrum and/or at least a portion of the near UV or even UV spectrum. The panchromatic elementary filters may be elements transparent in the visible, or openings (cut-outs) in the matrix 110. In this second case, the pixels of the photocathode located below these panchromatic elementary filters receive unfiltered light.
[0014] The different types of primary color filters, and the panchromatic filters, are distributed over the matrix of elementary filters. The elementary filters are advantageously arranged in the form of a pattern repeated periodically in two distinct, generally orthogonal directions in the plane of the photocathode 120. Each pattern preferably comprises at least one primary color filter of each type, and panchromatic filters. Although elementary filters of square shape have been illustrated, these may have any other geometrical shape, for example a hexagon, a disk, or a surface defined according to constraints relating to the transfer function of the device 100 according to the invention. The matrix of elementary filters according to the invention can be real, or virtual. The matrix of elementary filters is said to be real when it comprises elementary filters having a certain thickness, for example elementary filters made of polymer material or interference filters. The matrix of elementary filters is called virtual when it consists of an image of a second matrix of elementary filters, projected upstream of the photocathode. In this case, the second matrix of elementary filters consists of a real matrix of elementary filters. It is located in the object plane of a projection optical system. The image formed in the image plane of this projection optical system corresponds to said matrix of virtual elementary filters. An advantage of this variant is that it eliminates any difficulty in positioning a real matrix at the desired location. In the examples developed with reference to the figures, the case of a real matrix of elementary filters has been developed. Many variants can be envisaged, by replacing the matrix of real elementary filters by a matrix of virtual elementary filters. Preferably, the device according to the invention will then comprise the second matrix of elementary filters and the projection optical system, as mentioned above.
[0015] Preferably, but in a nonlimiting manner, the proportion of panchromatic elementary filters in the matrix 110 is greater than or equal to 50%. Advantageously, the proportion of panchromatic elementary filters is equal to 75%. The elementary filters of primary color can be divided in equal proportions. As a variant, the elementary filters of primary color are distributed in unequal proportions. Preferably, the proportion of a first type of primary color filter does not exceed twice the proportion of the other types of primary color filters. For example, the proportion of panchromatic elementary filters is equal to 75%, the proportion of filters of a first primary color is equal to 12.5%, and the proportions of filters of a second and a third primary color are equal to 6.25% and 6.25%. The matrix 110 receives an initial photon flux. For illustrative purposes, initial elementary fluxes of photons 101, each associated with an elementary filter 111, are represented. The initial elementary fluxes of photons 101 together form a polychromatic image, and may comprise photons located in the visible spectrum, the near infrared and even the infrared. An elementary filter 111 transmits a filtered elementary flux 102, the filtered elementary fluxes together forming a flux of photons incident on the photocathode. In response to this incident flux of photons, the photocathode 120 emits a stream of electrons. Each filtered elementary flux 102 corresponds to an elementary electron flux 103. An elementary electron flux 103 is all the greater as the corresponding filtered elementary flux 102 comprises more photons. The elementary electron fluxes 103 do not directly convey chromatic information, but depend directly on the number of photons transmitted by the corresponding elementary filter 111. The elementary fluxes of electrons 103 together form a stream of electrons emitted by the photocathode 120. The device 100 according to the invention furthermore comprises a digital sensor 130. As detailed below, the sensor 130 can directly receive the flow of electrons emitted by the photocathode 120. Alternatively, this electron flow emitted by the photocathode 120 can be converted into a photon flux so that the sensor 130 finally receives a stream of photons. FIG. 1 being a simple illustration of principle, the sensor 130 is shown directly after the photocathode 120. The sensor 130 may be a photon-sensitive or electron-sensitive sensor, and other elements may be interposed between the photocathode 120 and the sensor 130. The sensor is sensitive to electrons as emitted by the photocathode, or to photons obtained from these electrons.
[0016] Preferably, the sensor is sensitive to: - photons located in the 400-900 nm band, or even 400-1100 nm, or even a spectral band ranging from UV to near infrared, for example 200-1100 nm; or - electrons obtained from photons in this band. The sensor is formed by a matrix of elements, called pixels 131, sensitive to photons or electrons. Each elementary filter 111 is associated with at least one pixel 131 of the sensor. In other words, each elementary filter 111 is aligned with at least one pixel 131 of the sensor, so that a major part of a flow of electrons or photons, resulting from the photons transmitted by this elementary filter 111, reaches this at least one pixel 131. Preferably, each elementary filter 111 is associated with exactly one pixel 131 of the sensor. Preferably, the surface of an elementary filter 111 corresponds to the surface of a pixel 131 of the sensor or to a surface corresponding to the juxtaposition of an integer number of pixels 131 of the sensor. Since each elementary filter 111 is associated with at least one pixel 131 of the sensor, a pixel of the sensor associated with a panchromatic elementary filter can be called a "panchromatic pixel", and a pixel of the sensor associated with a primary color elementary filter a "primary color pixel". The panchromatic pixels detect electrons or photons associated with the spectral band transmitted by the panchromatic filters. Each type of primary color pixel detects electrons or photons associated with the spectral band transmitted by the corresponding primary color filter type. The sensor 130 is connected to processing means 140, that is to say calculation means including a processor or a microprocessor. The processing means 140 receive as input electrical signals supplied by the sensor 130, corresponding, for each pixel 131, to the stream of photons received and detected by this pixel when the sensor is sensitive to photons, or to the electron flow received and detected by this pixel when the sensor is sensitive to electrons. The processing means 140 output an image, corresponding to the initial flux of photons incident on the matrix of elementary filters, this flux having been intensified.
[0017] The processing means 140 are adapted to assign, to each pixel of the sensor, information on the type of elementary filter associated with this pixel. For this purpose, they store information making it possible to connect each pixel of the sensor with a type of elementary filter. This information can be in the form of a deconvolution matrix. Thus, the spectral information that is lost during the passage through the photocathode is restored by the processing means 140. The processing means 140 are adapted to implement a processing as shown in FIG. 2.
[0018] According to the first embodiment as detailed below, the processing means form a monochrome image by interpolation of all the panchromatic pixels of the sensor. This image is called the "monochrome image of the sensor". They then implement a segmentation of the sensor into several zones, each zone being homogeneous in terms of the flux of photons or electrons detected by the corresponding panchromatic pixels. Such a segmentation is for example described in the article by S. Tripathi et al. entitled "Image Segmentation: A Review", published in International Journal of Computer Science and Management Research, Vol. 1, No. 4, Nov. 2012, pp. 838-843. The processing means then implement the following steps. In a first step 280, a magnitude F is estimated, representative of an average surface flux of photons or electrons received and detected by the panchromatic pixels of a zone of the sensor, the sensor being respectively sensitive to photons or electrons. This quantity is called the "useful magnitude". The useful magnitude may be equal to the average surface flux of photons or electrons. If the sensor 130 is sensitive to photons, the useful magnitude may be an average luminance over the panchromatic pixels of the sensor zone. In a second step 281, the useful magnitude F is compared with a threshold value Fth. If the useful magnitude F is greater than the threshold value Fth, the zone of the sensor is in conditions of high illumination. If the useful magnitude F is smaller than the threshold value Fth, the sensor zone is in low illumination conditions. Steps 280 and 281 together form a step determining whether the zone of the sensor 130 is in low or high illumination conditions. A high illumination corresponds for example to the acquisition of an image of a night scene illuminated by the moon (night level 1 to 3). A low illumination corresponds for example to the acquisition of an image of a night scene not illuminated by the moon (night level 4 to 5, i.e. a luminous illumination of less than 500 µlux). If the zone is under high illumination conditions, a color image of this zone is formed using the primary color pixels of this zone (step 282A). The device is said to operate in high illumination mode. In particular, an image is formed for each primary color, and the images of each primary color are combined with each other. An image of a primary color is formed by interpolating the pixels of this zone associated with said primary color. The interpolation makes it possible to compensate for the small proportion of pixels of the sensor of a given primary color. The interpolation of the pixels of a primary color consists in using the values taken by these pixels to estimate the values that would be taken by the neighboring pixels if they were also pixels of this primary color. The primary color images can optionally be processed to improve image sharpness. For example, one can obtain a monochrome image of the zone by interpolating the panchromatic pixels of this zone, and combine this monochrome image, where appropriate after high-pass filtering, with each primary color image of the same zone. As the proportion of panchromatic pixels in the matrix is higher than that of the primary color pixels, the resolution of the primary color images is thus improved. If the zone is in low illumination conditions, a monochrome image of said zone is formed from the panchromatic pixels of this zone.
In particular, a monochrome image is formed using the panchromatic pixels of this zone (step 282B), and without using the primary color pixels of this zone. Here again, the monochrome image can be obtained by interpolation of the panchromatic pixels of this zone. The device is said to operate in low illumination mode.
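As an illustration of steps 280, 281, 282A and 282B, the sketch below processes one rectangular zone of the sensor, using nearest-neighbour interpolation as one possible way of interpolating the pixels of each filter type; the patent only requires an interpolation, so the use of scipy.interpolate.griddata, the mask dictionary and the threshold are assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_channel(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Estimate, over the whole zone, the values that neighbouring pixels
    would take if they were pixels of the same type as those in `mask`."""
    ys, xs = np.nonzero(mask)
    grid_y, grid_x = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    return griddata((ys, xs), frame[mask], (grid_y, grid_x), method="nearest")

def process_zone(frame: np.ndarray, masks: dict, f_th: float) -> np.ndarray:
    """masks: boolean masks for the 'W', 'R', 'G', 'B' pixels of the zone."""
    f_useful = frame[masks["W"]].mean()              # step 280: useful magnitude
    if f_useful > f_th:                              # step 281: high illumination
        # Step 282A: one image per primary color, combined into a color image.
        return np.stack([interpolate_channel(frame, masks[c])
                         for c in ("R", "G", "B")], axis=-1)
    # Step 282B: monochrome image from the panchromatic pixels only.
    return interpolate_channel(frame, masks["W"])
```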
[0019] It is important to note that the distinction between low illumination and high illumination is based on a measurement obtained from the panchromatic pixels of the sensor, and therefore over the entire spectrum detected by such a sensor, that is to say at least over the whole visible spectrum. These steps are performed for each previously identified zone of the sensor. Then, the color or monochrome images of the different zones of the sensor are combined to obtain an image of the entire sensor. The image of the entire sensor can be displayed, or stored in memory for further processing.
[0020] As a variant, a color image is formed for each zone of high illumination, then, in the monochrome image of the sensor used for segmentation, the zones corresponding to these zones of high illumination are replaced by the color images of these zones. According to another variant, a linear combination of the monochrome image of the sensor and these color images is performed. Thus, in high-illumination regions, the color image and the monochrome image are superimposed. In the example just described, the zones of the sensor are treated separately. Alternatively, it is determined whether the entire sensor is in conditions of low or high illumination, and the entire sensor is treated in the same way. In this case, there is no segmentation of the monochrome image of the sensor, nor any combination of the images obtained. Steps 280, 281 and 282A or 282B are implemented over the entire surface of the sensor. In other words, the sensor zone as mentioned above corresponds to the entire sensor.
[0021] Thus, the processing means 140 receive as input signals from the sensor, store information associating each pixel of the sensor with a type of elementary filter, and output a color image, or a monochrome image, or a combination of a color image and a monochrome image. The invention thus provides an image acquisition device for acquiring a color image of a zone of the sensor, when the illumination of the scene detected on this zone allows it. When this illumination becomes insufficient, the device provides an image of the zone obtained from the panchromatic elementary filters, thus with minimal energy loss. The device automatically selects one or the other mode of operation. Note that no second matrix of elementary filters is present on the sensor 130, since it suffices to take into account, during processing, the fact that a particular sensor pixel is associated with a given elementary filter located upstream of the photocathode. An image acquisition device having a high energy efficiency is thus produced. According to a first variant of this first embodiment, switching from one mode to the other operates with hysteresis so as to avoid any switching noise (chattering). To do this, a first threshold for the useful magnitude is provided for the transition from the high illumination mode to the low illumination mode and a second threshold for the useful magnitude is provided for the inverse transition, the first threshold being chosen lower than the second threshold. According to a second variant of the first embodiment, the switching from one mode to the other is done progressively through a transition phase. Thus, the image acquisition device operates in low illumination mode when the useful magnitude is less than a first threshold and in high illumination mode when it is greater than a second threshold, the second threshold being chosen greater than the first threshold. When the useful magnitude is between the first and second thresholds, the image acquisition device performs a linear combination of the image obtained by the high illumination mode processing and that obtained by the low illumination mode processing, the weighting coefficients being given by the deviations of the useful magnitude from the first and second thresholds respectively.
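The progressive transition of the second variant can be sketched as a simple blend of the two images, the weights being given by the position of the useful magnitude between the two thresholds; the images are assumed to be already formed and normalized, and all names are illustrative.

```python
import numpy as np

def blend_modes(mono: np.ndarray, color: np.ndarray,
                f_useful: float, f_low: float, f_high: float) -> np.ndarray:
    """Linear combination of the low-illumination (monochrome) and
    high-illumination (color) images during the transition phase."""
    mono_rgb = np.repeat(mono[..., None], 3, axis=-1)
    if f_useful <= f_low:
        return mono_rgb                              # low illumination mode
    if f_useful >= f_high:
        return color                                 # high illumination mode
    alpha = (f_useful - f_low) / (f_high - f_low)    # weight from the deviations
    return alpha * color + (1.0 - alpha) * mono_rgb
```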
[0022] Ideally, each elementary filter 111 is aligned with at least one pixel 131 of the sensor, so that each pixel of the sensor associated with an elementary filter receives only photons or electrons corresponding to this elementary filter. However, there may be a spatial spreading through the device according to the invention, in particular a spatial spreading of the electron flow emitted by the photocathode. This disadvantage can be countered by an initial calibration step making it possible to compensate for the misalignment between an elementary filter and a sensor pixel. This calibration aims to compensate for the slight degradation due to the transfer function of the optical elements of the device according to the invention (photocathode and, if appropriate, multiplication means and phosphor screen). During this calibration, the matrix of elementary filters is illuminated in turn with different monochromatic light beams (each corresponding to one of the primary colors of the primary color filters), and the signal received by the sensor 130 is measured. From these measurements, a deconvolution matrix is deduced, which is stored by the processing means 140. In operation, the processing means 140 multiply the signals transmitted by the sensor by the deconvolution matrix. Thus, after multiplication by the deconvolution matrix, the signals are reconstructed as they would be transmitted by the sensor under ideal conditions, without spatial spreading. Each primary color filter (and optionally each infrared filter, see below) is preferably entirely surrounded by panchromatic filters. Thus, in the case of spatial spreading of the electron flux emitted by the photocathode, the calibration is simplified. Alternatively or additionally, the geometric shape of the filters composing the matrix of elementary filters is calibrated so as to compensate for the effect of said spatial spreading. After deformation by the optical elements of the device according to the invention (photocathode and, if appropriate, multiplication means and phosphor screen), the image of an elementary filter is then superimposed perfectly on one or more pixels of the sensor.
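The calibration can be sketched as follows: each monochromatic calibration exposure provides one column of a measured response matrix and one column of the response expected without spreading, from which a deconvolution matrix is estimated by least squares and then applied to the flattened sensor signals. The estimation by pseudo-inverse is an assumption made for the example; the patent only states that a deconvolution matrix is deduced and stored.

```python
import numpy as np

def build_deconvolution_matrix(ideal: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """ideal, measured: (n_pixels, n_calibrations) matrices, one column per
    monochromatic calibration beam. A spreading matrix A with measured ≈ A @ ideal
    is estimated by least squares, and its pseudo-inverse is the stored
    deconvolution matrix."""
    spreading = measured @ np.linalg.pinv(ideal)
    return np.linalg.pinv(spreading)

def correct_frame(frame: np.ndarray, deconv: np.ndarray) -> np.ndarray:
    """Multiply the sensor signals by the deconvolution matrix to reconstruct
    the signals as they would be without spatial spreading."""
    return (deconv @ frame.ravel()).reshape(frame.shape)
```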
[0023] Interstices between adjacent elementary filters are advantageously opaque, in order to block any radiation likely to reach the photocathode without having passed through an elementary filter. FIGS. 3A and 3B schematically illustrate two variants of a first embodiment of a matrix 110 of elementary filters according to the invention. In Fig. 3A, the primary color elementary filters are red (R), green (G) or blue (B) filters. The matrix has 75% panchromatic filters (W).
[0024] The matrix 110 is generated by a two-dimensional periodic repetition of the following 4x4 base pattern (1):
R W G W
W W W W
G W B W
W W W W
Variants of this matrix can be obtained by permutation of the R, G, B filters in the pattern (1). There are twice as many green pixels as red or blue pixels. This imbalance can be corrected by suitable weighting coefficients when combining the three primary color images to form a color image.
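A sketch of how the 4x4 base pattern of FIG. 3A, as reconstructed above, can be tiled periodically over the sensor and used to recover the filter type associated with each pixel; the sensor dimensions and function names are illustrative.

```python
import numpy as np

# 4x4 base pattern of FIG. 3A: 75 % panchromatic filters (W),
# two G filters for one R and one B filter.
PATTERN_RGBW = np.array([["R", "W", "G", "W"],
                         ["W", "W", "W", "W"],
                         ["G", "W", "B", "W"],
                         ["W", "W", "W", "W"]])

def tile_pattern(pattern: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Two-dimensional periodic repetition of the base pattern over a
    rows x cols sensor."""
    reps = (-(-rows // pattern.shape[0]), -(-cols // pattern.shape[1]))
    return np.tile(pattern, reps)[:rows, :cols]

cfa = tile_pattern(PATTERN_RGBW, 480, 640)
masks = {f: cfa == f for f in ("R", "G", "B", "W")}
# Proportions: W = 0.75, G = 0.125, R = B = 0.0625, matching the text above.
print({f: round(float(masks[f].mean()), 4) for f in masks})
```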
[0025] The matrix of FIG. 3B corresponds to the matrix of FIG. 3A, in which the primary color elementary filters R, G, B are replaced respectively by yellow (Ye), magenta (Ma) and cyan (Cy) primary color elementary filters. Again, the filters Ye, Ma, Cy can be permuted. According to a variant, not shown, of the matrix represented in FIG. 3A, the panchromatic filters represent 50% of the elementary filters, and the elementary pattern, referred to as pattern (2), comprises primary color filters R, G, B, X and Y together with panchromatic filters W, with X = R, G or B, Y = R, G or B, and Y ≠ X. Again, the filters R, G, B can be permuted. Alternatively, the filters R, G, B of the pattern (2) are replaced by filters Ye, Ma, Cy. Figure 4 schematically illustrates a first embodiment of a device 400 according to the invention. Figure 4 will only be described for its differences with respect to Figure 1. The use of a calibration step as detailed above is particularly advantageous in this embodiment. The device 400 is based on the technology called intensified CMOS or intensified CCD (ICMOS or ICCD, for "Intensified CMOS" or "Intensified CCD").
[0026] The photocathode 420 is disposed inside a vacuum tube 450, of the same type as the vacuum tube of an image intensifier tube according to the prior art as described in the introduction. A vacuum tube designates a vacuum chamber having, more particularly, a tube shape. The vacuum tube 450 has an inlet window 451, transparent in particular in the visible, and optionally in the near infrared or even the infrared. The input window lets the flux of photons incident on the photocathode enter the vacuum tube. The entrance window is in particular made of glass. The input window is preferably a simple plate. The matrix of elementary filters 410 is glued on one face of the inlet window 451, preferably on the inside of the vacuum tube. The photocathode is pressed against the matrix of elementary filters 410. A metal layer (not shown) may be deposited on the input window, around the matrix of elementary filters 410, to form an electrical contact point for the application of an electrostatic field. Downstream of the photocathode 420 are multiplication means 461 and a phosphor screen 462 as described in the introduction. The phosphor screen emits a stream of photons, called the useful flux, which is received by the sensor 430. The sensor 430 is photosensitive. It is in particular a CCD (Charge-Coupled Device) sensor, or a CMOS (Complementary Metal Oxide Semiconductor) sensor. In FIG. 4, the sensor 430 is represented inside the vacuum tube, the tube being traversed by electrical connections between the sensor 430 and the processing means 440.
[0027] The processing means 440 operate as described with reference to FIG. 2, the useful magnitude being representative of the surface flux of photons detected by the panchromatic pixels of the sensor 430. The sensor 430 may be in direct contact with the phosphor screen, in order to limit a possible spatial spreading of the photon beam emitted by the phosphor screen. In this case, the sensor 430 may be inside the vacuum tube, or outside and against an outlet face of the vacuum tube, formed by the phosphor screen. The sensor 430 can also be offset outside the vacuum tube 450. In particular, an optical fiber bundle can connect the phosphor screen and the pixels of the sensor 430, the optical fiber bundle forming an exit window of the vacuum tube. Such an optical fiber bundle is particularly suitable in the case where the surface of the sensor 430 is smaller than the inside diameter of the vacuum tube. In this case, each fiber has a diameter on the phosphor screen side greater than its diameter on the sensor side. The bundle of optical fibers is said to be tapered, and performs a reduction of the image provided by the phosphor screen. FIGS. 5A and 5B schematically illustrate two variants of a second embodiment of a device 500 according to the invention. FIG. 5A will only be described for its differences with respect to FIG. 1. The device 500 is based on electro-bombarded CMOS technology, or EBCMOS for "Electron Bombarded CMOS".
[0028] The photocathode 520 is disposed inside a vacuum tube 550. The vacuum tube 550 has an inlet window 551, transparent in particular in the visible, and where appropriate in the near infrared or even the infrared. The elementary filter matrix 510 is adhered to one face of the input window 551, preferably on the inside of the vacuum tube. The sensor 530 is disposed inside the vacuum tube 550, and directly receives the stream of electrons emitted by the photocathode. The photocathode 520 and the sensor 530 are within a few millimeters of each other, and subjected to a potential difference to create an electrostatic field in the gap between them. This electrostatic field accelerates the electrons emitted by the photocathode 520 towards the sensor 530. The sensor 530 is sensitive to electrons. It is typically a CMOS sensor adapted so as to be sensitive to electrons.
[0029] According to a first variant, the electron-sensitive sensor is illuminated on the back side ("back side illuminated"). For this, one can use a CMOS sensor whose substrate is thinned and passivated ("back-thinned"). The sensor may include a passivation layer, forming an outer layer on the side of the photocathode. The passivation layer is deposited on the thinned substrate. The substrate receives detection diodes, each associated with a pixel of the sensor. According to a second variant, the electron-sensitive sensor is illuminated on the front face. For this purpose, it is possible to use a CMOS sensor whose front face is treated so as to remove the protective layers covering the diodes. The front face of a standard CMOS sensor is thus made sensitive to electrons. The processing means 540 operate as described with reference to FIG. 2, the useful magnitude being representative of the surface flux of electrons detected by the panchromatic pixels of the sensor 530.
[0030] FIG. 5B illustrates a variant of the device 500 of FIG. 5A, in which the vacuum tube 550 is closed by a bundle 552 of optical fibers receiving the matrix of elementary filters. According to this variant, the bundle 552 of optical fibers is traversed by photons coming from the scene to be imaged. A first end of the fiber optic bundle 552 closes the vacuum tube. A second end of the bundle 552 of optical fibers faces the scene to be imaged. The vacuum tube 550 no longer has the input window 551, which is replaced by the optical fiber bundle, which allows the vacuum tube to be offset from the scene to be imaged. Each elementary filter of the matrix 510 is associated with an optical fiber of the bundle 552. In particular, each elementary filter is directly attached to an optical fiber end, advantageously on the side opposite to the vacuum tube. In this case, the matrix of elementary filters 510 is outside the vacuum tube, which simplifies its assembly. Alternatively, each elementary filter is directly attached to an optical fiber end, on the side of the vacuum tube. A variant of the device described with reference to FIG. 4 can be made in the same way. FIG. 6 schematically illustrates a second embodiment of a matrix of elementary filters according to the invention. The matrix of elementary filters of FIG. 6 differs from the previously described matrices in that it comprises infrared (IR) filters, not transmitting the visible part of the spectrum and allowing the near infrared to pass. Infrared filters transmit wavelengths in the near infrared, or even in the infrared (wavelengths greater than 700 nm). Infrared filters transmit in particular the spectral band between 700 and 900 nm, or even between 700 and 1100 nm, and even between 700 and 1700 nm. The filter matrix of FIG. 6 differs from the matrix of FIG. 3A in that, in the elementary pattern, one of the two green pixels (G) is replaced by an infrared (IR) pixel. Different variants of the matrix of FIG. 6 can be formed in the same way, for example from the matrix of FIG. 3B, by replacing one of the two magenta pixels of the elementary pattern with an infrared pixel. According to other variants, the elementary pattern (2) as defined above is taken up again, defining X = Y = IR. FIG. 7 schematically illustrates a processing implemented by the processing means according to the invention, when the matrix of elementary filters comprises infrared pixels. Steps 780, 781 and 782B respectively correspond to steps 280, 281 and 282B as described with reference to FIG. 2. When a zone of the sensor is in conditions of low illumination, the processing means measure a quantity, called the secondary quantity FIR, representative of the average surface flux of photons or electrons detected by the infrared pixels of this zone (step 783). In particular, this average surface flux is an average surface flux of photons if the sensor is photosensitive, or an average surface flux of electrons if the sensor is sensitive to electrons. The processing means then make a comparison between this secondary quantity and an infrared threshold FIR th (step 784).
[0031] If the secondary quantity FIR is smaller than the infrared threshold FIR th, a color image of the zone is constructed, as described with reference to FIG. 2 with respect to step 282A (step 782A). If the secondary quantity FIR is greater than the infrared threshold FIR th, a false-color image of the zone is constructed, that is to say an image in which a given color is attributed to the infrared pixels of this zone. The false-color image can be constructed by interpolating the infrared pixels of the considered zone. The false-color image is therefore a monochrome image, of a color different from that of the monochrome image associated with the panchromatic pixels. Then, this false-color image is superimposed on the monochrome image obtained using the panchromatic pixels of the same zone of the sensor. These steps of constructing a false-color image and superimposing it on the monochrome image together form a step 782C.
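A sketch of step 782C, assuming the monochrome image of the zone and the image interpolated from its infrared pixels are already available and normalized to [0, 1]; the choice of red as the false color and the scaling of the overlay are assumptions made for the example.

```python
import numpy as np

def overlay_false_color(mono: np.ndarray, ir_interp: np.ndarray,
                        ir_threshold: float,
                        color=(1.0, 0.0, 0.0)) -> np.ndarray:
    """Superimpose a false-color image obtained from the infrared pixels on the
    monochrome image obtained from the panchromatic pixels of the zone."""
    out = np.repeat(mono[..., None], 3, axis=-1).astype(float)
    f_ir = float(ir_interp.mean())               # secondary quantity FIR (step 783)
    if f_ir > ir_threshold:                      # comparison of step 784
        weight = ir_interp / max(float(ir_interp.max()), 1e-12)
        out = out + weight[..., None] * np.asarray(color)   # step 782C overlay
    return np.clip(out, 0.0, 1.0)
```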
[0032] Thus, for a zone located in low illumination conditions, one obtains either a monochrome image or the image superposition as defined above. In summary, when a zone is in low illumination conditions, it is tested whether the infrared pixels belonging to this zone have an intensity greater than a predetermined infrared threshold and, if so, the infrared pixels, shown in false color, are superimposed on the monochrome image of this zone. This embodiment is particularly advantageous for laser detection applications. According to a first variant, rather than a single secondary quantity for the whole zone, a secondary quantity is calculated separately for each infrared pixel of the zone. Only the infrared pixels for which the corresponding secondary quantity is greater than the infrared threshold are superimposed on the monochrome image obtained from the panchromatic pixels. Thus, if a sensor zone has a high intensity in the infrared range, it will be easily identifiable in the resulting image. According to another variant, sub-zones of said sensor zone are identified, detecting an average surface flux of photons or electrons that is homogeneous in the infrared spectrum, and each sub-zone is then treated separately as detailed above. In other words, the comparison with the infrared threshold is done by homogeneous sub-zones of the sensor. For each sub-zone of the sensor for which the secondary magnitude is greater than the infrared threshold, a false-color image is obtained by interpolation of the infrared pixels of said sub-zone. These false-color images are then superimposed on the corresponding locations of the monochrome image of the sensor zone. To identify such sub-zones, a segmentation is performed on the basis of an image made by interpolation of the infrared pixels. In summary, when a zone is in low illumination conditions, sub-zones of this zone having a uniform intensity in the infrared spectrum are identified, and it is determined, for each sub-zone thus identified, whether the average infrared intensity in this sub-zone is greater than a predetermined infrared threshold and, if so, this sub-zone is represented by a false-color image based on the infrared pixels of that sub-zone, the false-color image of said sub-zone then being superimposed on the monochrome image of the zone to which it belongs. The infrared pixels of the sensor can also be used to improve the signal-to-noise ratio of a final color image. For this, when a zone of the sensor is in conditions of high illumination, an infrared image of this zone is produced by interpolation of the infrared pixels of the sensor. This infrared image is then subtracted from the color image of this zone, obtained as detailed with reference to FIG. 2. The subtraction of the infrared image makes it possible to improve the signal-to-noise ratio. To avoid saturation problems, a weighted infrared image can be subtracted from each of the primary color images. The weighting coefficients assigned to the infrared image may be identical or different for each primary color image. Denoised primary color images are obtained, which are combined to form a denoised color image.
Thus, the processing means are adapted to implement the following steps: calculating the useful magnitude, to determine if at least one zone of the sensor is in conditions of low or high illumination; only if said zone is in conditions of high illumination, forming a color image of said zone from the pixels of this zone associated with primary color filters, and subtracting from it an infrared image of said zone, obtained from the infrared pixels of this zone (for example by interpolation of said infrared pixels).
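The infrared subtraction used to improve the signal-to-noise ratio can be sketched as below, with one weighting coefficient per primary color image; the coefficient values and image normalization are placeholders, the patent leaving them unspecified.

```python
import numpy as np

def subtract_infrared(color: np.ndarray, ir_image: np.ndarray,
                      weights=(1.0, 1.0, 1.0)) -> np.ndarray:
    """color: (H, W, 3) image formed from the primary color pixels of a zone in
    high illumination; ir_image: (H, W) image interpolated from the infrared
    pixels of the same zone. A weighted infrared image is subtracted from each
    primary color plane, which denoises the final color image."""
    out = color.astype(float).copy()
    for channel, w in enumerate(weights):
        out[..., channel] -= w * ir_image
    return np.clip(out, 0.0, None)   # avoid negative values after subtraction
```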
Claims:
Claims (16)
[0001]
CLAIMS 1. An image acquisition device (100; 400; 500) comprising: a photocathode (110; 410; 510) adapted to convert an incident photon flux into an electron stream; a sensor (130; 430; 530) consisting of a matrix of elements, called pixels; and processing means (140; 440; 540); characterized in that: the device (100; 400; 500) comprises a matrix (120; 420; 520) of elementary filters, each associated with at least one pixel of the sensor, said matrix being arranged upstream of the photocathode, so that an initial flow of photons passes through said matrix before reaching the photocathode; the matrix comprises primary color filters (R, G, B; Ye, Ma, Cy), a primary color filter not transmitting a portion of the visible spectrum complementary to said primary color, and filters transmitting the entire visible spectrum, called panchromatic filters (W); and the processing means (140; 440; 540) are adapted to: calculate a magnitude, called the useful magnitude (F), to determine if at least one zone of the sensor is in low or high illumination conditions, the useful magnitude being representative of an average surface flux of photons or electrons detected on a set of so-called panchromatic pixels of the sensor, each panchromatic pixel being associated with a panchromatic filter (W); only if said zone is in conditions of high illumination, form a color image of said zone from the pixels of this zone associated with primary color filters.
[0002]
2. Device (400; 500) according to claim 1, characterized in that the photocathode (420; 520) is disposed inside a vacuum chamber (450; 550) and in that the matrix of elementary filters (420; 520) is located on an entrance window (451; 551) of said vacuum chamber.
[0003]
3. Device (500) according to claim 1, characterized in that the photocathode (520) is disposed inside a vacuum chamber (550) closed by a bundle of optical fibers (552), and in that each elementary filter of the elementary filter matrix (510) is deposited on an end of an optical fiber of said bundle (552).
[0004]
4. Device (400) according to any one of claims 1 to 3, characterized in that the sensor (430) is a photosensitive sensor, in that the processing means (440) are adapted to calculate a magnitude representative of an average surface flux of photons, and in that the device further comprises: multiplication means (461), adapted to receive the flow of electrons emitted by the photocathode, and to provide in response a secondary electron flow; and a phosphor screen (462), adapted to receive the secondary electron flux and to provide in response a photon flux, called the useful photon flux, the sensor (430) being arranged to receive said useful photon flux.
[0005]
5. Device (500) according to any one of claims 1 to 3, characterized in that the sensor (530) is an electron-sensitive sensor, adapted to receive the electron flow emitted by the photocathode, and in that the processing means (540) are adapted to calculate a magnitude representative of an average surface flux of electrons.
[0006]
6. Device (100; 400; 500) according to any one of claims 1 to 5, characterized in that the panchromatic filters represent 75% of the elementary filters.
[0007]
7. Device (100; 400; 500) according to claim 6, characterized in that the matrix of elementary filters (110; 410; 510) is generated by the two-dimensional periodic repetition of the following pattern:
R W G W
W W W W
G W B W
W W W W
where R, G, B respectively represent red, green and blue primary color filters, and W represents a panchromatic filter, the pattern being defined up to a permutation of R, G, B.
[0008]
8. Device (100; 400; 500) according to claim 6, characterized in that the matrix of elementary filters (110; 410; 510) is generated by the two-dimensional periodic repetition of the following pattern:
Ye W Ma W
W  W W  W
Ma W Cy W
W  W W  W
where Ye, Ma, Cy respectively represent yellow, magenta and cyan primary color filters, and W represents a panchromatic filter, the pattern being defined up to a permutation of Ye, Ma, Cy.
[0009]
9. Image acquisition device (100; 400; 500) according to one of claims 1 to 8, characterized in that the processing means are adapted to: determine that said zone is at low illumination, if the useful magnitude (F) is less than a first threshold; and determine that said zone is at high illumination, if the useful magnitude (F) is greater than a second threshold, the second threshold being greater than the first threshold.
[0010]
10. An image acquisition device (100; 400; 500) according to claim 9, characterized in that, if the useful magnitude (F) is between the first and second thresholds, the processing means are adapted to combine a monochrome image and the color image of said zone, the monochrome image of said zone being obtained from the panchromatic pixels of this zone.
[0011]
11. An image acquisition device (100; 400; 500) according to one of claims 1 to 10, characterized in that the processing means are adapted to: form a monochrome image from the set of panchromatic pixels of the sensor; segment this monochrome image into homogeneous regions; and for each zone of the sensor associated with a homogeneous region, independently calculate the corresponding useful magnitude to determine whether said zone is in low or high illumination conditions.
[0012]
12. An image acquisition device (100; 400; 500) according to one of claims 1 to 11, characterized in that the matrix of elementary filters (110; 410; 510) further comprises infrared filters (IR) not transmitting the visible portion of the spectrum, each infrared filter being associated with at least one pixel of the sensor, called an infrared pixel.
[0013]
13. An image acquisition device (100; 400; 500) according to claim 12, characterized in that, when a zone is in conditions of low illumination, the processing means are adapted to: compare a predetermined infrared threshold (FIR th) and a quantity, called the secondary magnitude (FIR), representative of an average surface flux of photons or electrons detected by the infrared pixels of this zone; when said secondary quantity is greater than the predetermined infrared threshold, superimpose a monochrome image obtained from the panchromatic pixels of this zone and a false-color image obtained from the infrared pixels of this zone.
[0014]
14. An image acquisition device (100; 400; 500) according to claim 12, characterized in that, when a zone is in low illumination conditions, the processing means are adapted to: from the infrared pixels of this zone, identify sub-zones of this zone detecting an average surface flux of photons or electrons that is homogeneous in the infrared spectrum; for each sub-zone thus identified, compare a predetermined infrared threshold (FIR th) and a quantity, called the secondary quantity (FIR), representative of an average surface flux of photons or electrons detected by the infrared pixels of this sub-zone; when said secondary quantity is greater than the predetermined infrared threshold, superimpose a monochrome image obtained from the panchromatic pixels of this sub-zone and a false-color image obtained from the infrared pixels of this sub-zone.
[0015]
15. An image acquisition device (100; 400; 500) according to any one of the preceding claims, characterized in that the matrix (120; 420; 520) of elementary filters consists of an image projected by a projection optical system.
[0016]
16. An image forming method implemented in a device (100; 400; 500) comprising a photocathode (120; 420; 520) adapted to convert an incident photon flux into an electron stream, and a sensor (130; 430; 530), characterized in that it comprises the following steps: filtering an initial photon flux, to provide said incident flux of photons, this filtering implementing a matrix of elementary filters (110; 410; 510) comprising primary color filters (R, G, B; Ye, Ma, Cy), a primary color filter not transmitting a portion of the visible spectrum complementary to said primary color, and filters transmitting the entire visible spectrum, called panchromatic filters (W); calculating a magnitude, called the useful magnitude (F), to determine if at least one zone of the sensor is in conditions of low or high illumination, the useful magnitude being representative of an average surface flux of photons or electrons detected on a set of so-called panchromatic pixels of the sensor, each panchromatic pixel being associated with a panchromatic filter (W); only if said zone is in conditions of high illumination, forming a color image of said zone from the pixels of this zone associated with primary color filters (R, G, B; Ye, Ma, Cy).
Similar technologies:
Publication number | Publication date | Title
EP3198625B1|2018-12-12|Bimode image acquisition device with photocathode
CA2909554C|2021-08-10|Device for acquiring bimodal images
US9497370B2|2016-11-15|Array camera architecture implementing quantum dot color filters
JP2011199798A|2011-10-06|Physical information obtaining apparatus, solid-state imaging apparatus, and physical information obtaining method
EP2693242A2|2014-02-05|Optical filter structure in the visible and/or infrared
EP3387824B1|2020-09-30|System and method for acquiring visible and near infrared images by means of a single matrix sensor
EP2636068A1|2013-09-11|Monolithic multispectral visible-and-infrared imager
EP3155660B1|2019-01-09|Bispectral matrix sensor and method for manufacturing same
US10559615B2|2020-02-11|Methods for high-dynamic-range color imaging
CN106461829A|2017-02-22|Optical filter, solid-state imaging apparatus, and electronic device
US10609361B2|2020-03-31|Imaging systems with depth detection
US20180359431A1|2018-12-13|Combined visible and infrared image sensor incorporating selective infrared optical filter
EP3685573B1|2021-04-21|Bayer matrix image sensor
JP2019165447A|2019-09-26|Solid-state imaging apparatus and imaging system
FR3026227A1|2016-03-25|DEVICE FOR ACQUIRING 3D IMAGES
FR3040798A1|2017-03-10|PLENOPTIC CAMERA
EP3024011A1|2016-05-25|System for collecting low-light images comprising a lens having a phase and/or amplitude filter
FR3056060B1|2019-07-05|CAMERA ADAPTED TO WORK CONTINUOUSLY IN A RADIOACTIVE ENVIRONMENT.
FR3059823B1|2019-08-23|IMPROVED MULTISPECTRAL DETECTION DEVICE
FR2845487A1|2004-04-09|Light collection system, especially for use with an optical spectrometer system has first and second mirrors that respectively collect the light emitted from the source and focus it on the second mirror and a detector
FR2968877A1|2012-06-15|Image sensor for detecting color on surface of substrate semiconductor, has pixels arranged in rows adjacent to each other based on pixel width, where pixels of adjacent rows are being offset relative to each other than half of pixel length
Bruna et al.2010|Notions about optics and sensors
FR2516705A1|1983-05-20|PHOTOELECTRIC DETECTION STRUCTURE
FR3054893A1|2018-02-09|FILTERING DEVICE FOR DETECTING, IN AN OPTICAL SIGNAL, AT LEAST ONE WAVELENGTH EMITTED BY A LASER AND A BAND OF INFRARED WAVE LENGTHS
FR2536616A1|1984-05-25|IMAGE ANALYZING DEVICE FOR COLOR TELEVISION CAMERA
Patent family:
Publication No. | Publication date
US9972471B2|2018-05-15|
IL251222A|2020-11-30|
US20170287667A1|2017-10-05|
EP3198625B1|2018-12-12|
CN106716592A|2017-05-24|
IL251222D0|2017-05-29|
CN106716592B|2019-03-05|
CA2961118A1|2016-03-31|
JP2017533544A|2017-11-09|
SG11201702126UA|2017-04-27|
FR3026223B1|2016-12-23|
EP3198625A1|2017-08-02|
WO2016046235A1|2016-03-31|
JP6564025B2|2019-08-21|
Cited references:
Publication No. | Filing date | Publication date | Applicant | Patent title
JPH03112041A|1989-09-27|1991-05-13|Hamamatsu Photonics Kk|Color image tube|
WO1995006388A1|1993-08-20|1995-03-02|Intevac, Inc.|Life extender and bright light protection for cctv camera system with image intensifier|
GB2302444A|1995-06-15|1997-01-15|Orlil Ltd|Colour image intensifier|
US20040036013A1|2002-08-20|2004-02-26|Northrop Grumman Corporation|Method and system for generating an image having multiple hues|
US5233183A|1991-07-26|1993-08-03|Itt Corporation|Color image intensifier device and method for producing same|
GB2273812B|1992-12-24|1997-01-08|Motorola Inc|Image enhancement device|
AU709025B2|1995-10-31|1999-08-19|Benjamin T Gravely|Imaging system|
KR100214885B1|1996-02-29|1999-08-02|윤덕용|Flat panel display device using light emitting device and electron multiplier|
US5914749A|1998-03-31|1999-06-22|Intel Corporation|Magenta-white-yellow color system for digital image sensor applications|
US6456793B1|2000-08-03|2002-09-24|Eastman Kodak Company|Method and apparatus for a color scannerless range imaging system|
US20030147002A1|2002-02-06|2003-08-07|Eastman Kodak Company|Method and apparatus for a color sequential scannerless range imaging system|
JP4311988B2|2003-06-12|2009-08-12|アキュートロジック株式会社|Color filter for solid-state image sensor and color image pickup apparatus using the same|
US7123298B2|2003-12-18|2006-10-17|Avago Technologies Sensor Ip Pte. Ltd.|Color image sensor with imaging elements imaging on respective regions of sensor elements|
JP4678172B2|2004-11-22|2011-04-27|株式会社豊田中央研究所|Imaging device|
WO2006138599A2|2005-06-17|2006-12-28|Infocus Corporation|Synchronization of an image producing element and a light color modulator|
CN1971927B|2005-07-21|2012-07-18|索尼株式会社|Physical information acquiring method, physical information acquiring device and semiconductor device|
US8139130B2|2005-07-28|2012-03-20|Omnivision Technologies, Inc.|Image sensor with improved light sensitivity|
US7688368B2|2006-01-27|2010-03-30|Eastman Kodak Company|Image sensor with improved light sensitivity|
KR20070115243A|2006-06-01|2007-12-05|삼성전자주식회사|Apparatus for photographing image and operating method for the same|
US8118226B2|2009-02-11|2012-02-21|Datalogic Scanning, Inc.|High-resolution optical code imaging using a color imager|
CN202696807U|2012-07-20|2013-01-23|合肥汉翔电子科技有限公司|Microfilter color shimmer imaging mechanism|
JP5981820B2|2012-09-25|2016-08-31|浜松ホトニクス株式会社|Microchannel plate, microchannel plate manufacturing method, and image intensifier|
FR3004882B1|2013-04-17|2015-05-15|Photonis France|DEVICE FOR ACQUIRING BIMODE IMAGES|
US9503623B2|2014-06-03|2016-11-22|Applied Minds, Llc|Color night vision cameras, systems, and methods thereof|
FR3045263B1|2015-12-11|2017-12-08|Thales Sa|SYSTEM AND METHOD FOR ACQUIRING VISIBLE AND NEAR INFRARED IMAGES USING A SINGLE MATRIX SENSOR|
JP2017112401A|2015-12-14|2017-06-22|ソニー株式会社|Imaging device, apparatus and method for image processing, and program|
US10197441B1|2018-01-30|2019-02-05|Applied Materials Israel Ltd.|Light detector and a method for detecting light|
US11268849B2|2019-04-22|2022-03-08|Applied Materials Israel Ltd.|Sensing unit having photon to electron converter and a method|
Legal status:
2015-09-30| PLFP| Fee payment|Year of fee payment: 2 |
2016-03-25| PLSC| Search report ready|Effective date: 20160325 |
2016-09-28| PLFP| Fee payment|Year of fee payment: 3 |
2017-09-29| PLFP| Fee payment|Year of fee payment: 4 |
2018-05-04| RM| Correction of a material error|Effective date: 20180328 |
2018-09-28| PLFP| Fee payment|Year of fee payment: 5 |
2019-09-30| PLFP| Fee payment|Year of fee payment: 6 |
2021-06-11| ST| Notification of lapse|Effective date: 20210506 |
Priority:
Application No. | Filing date | Patent title
FR1458903A|FR3026223B1|2014-09-22|2014-09-22|APPARATUS FOR ACQUIRING PHOTOCATHODE BIMODE IMAGES.|FR1458903A| FR3026223B1|2014-09-22|2014-09-22|APPARATUS FOR ACQUIRING PHOTOCATHODE BIMODE IMAGES.|
CA2961118A| CA2961118A1|2014-09-22|2015-09-22|Bimode image acquisition device with photocathode|
CN201580050815.3A| CN106716592B|2014-09-22|2015-09-22|Dual mode image acquisition device with photocathode|
US15/512,253| US9972471B2|2014-09-22|2015-09-22|Bimode image acquisition device with photocathode|
JP2017515796A| JP6564025B2|2014-09-22|2015-09-22|Image acquisition apparatus and image forming method|
PCT/EP2015/071789| WO2016046235A1|2014-09-22|2015-09-22|Bimode image acquisition device with photocathode|
SG11201702126UA| SG11201702126UA|2014-09-22|2015-09-22|Bimode image acquisition device with photocathode|
EP15766546.4A| EP3198625B1|2014-09-22|2015-09-22|Bimode image acquisition device with photocathode|
IL251222A| IL251222A|2014-09-22|2017-03-16|Bimode image acquisition device with photocathode|